255 research outputs found

    Prediction of unsupported excavations behaviour with machine learning techniques

    Artificial intelligence and machine learning algorithms have attracted increasing interest from the research community, triggering new applications and services in many domains. In geotechnical engineering, for instance, neural networks have been used to benefit from information gained at a given site in order to extract relevant constitutive soil information from field measurements [1]. The goal of this work is to use supervised machine learning techniques to predict the behaviour of a sheet pile wall excavation, minimizing a loss function that maps the input (excavation depth, soil characteristics, wall stiffness) to a predicted output (wall deflection, soil settlement, wall bending moment). Neural networks are used to perform this supervised learning. A neural network is composed of neurons, which apply a mathematical function to their input (see Figure 1, left), and synapses, which pass the output of one neuron to the input of another. For our purpose, neural networks can be understood as a set of nonlinear functions that can be fitted to data by changing their parameters. In this work, a simple class of neural networks, called Multi-Layer Perceptrons (MLPs), is used. They are composed of an input layer of neurons, an output layer, and one or several intermediate (hidden) layers (see Figure 1, right). A neural network learns by adjusting its weights and biases in order to minimize a loss function (for instance, the mean squared error) between the desired and the predicted output. Stochastic gradient descent or one of its variants is used to adjust the parameters, and the gradients are obtained through backpropagation (an efficient application of the chain rule). The interest in neural networks comes from the fact that they are universal function approximators, in the sense that they can approximate any continuous function to any precision given enough neurons. However, this can lead to over-fitting problems, where the network learns the noise in the data or, worse, memorizes each sample by rote [2].
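    As a rough illustration of the workflow sketched in this abstract, the snippet below fits a small MLP regressor to synthetic data with made-up feature and target names (excavation depth, soil stiffness, wall stiffness, wall deflection); it is a hedged sketch under those assumptions, not the authors' model or data:

```python
# Illustrative sketch only: a small MLP mapping excavation/soil/wall features
# to a predicted wall deflection, in the spirit of the abstract above.
# Feature names, units and data are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical inputs: [excavation depth (m), soil stiffness (MPa), wall stiffness (kNm^2)]
X = rng.uniform([2.0, 10.0, 1e4], [10.0, 80.0, 1e5], size=(500, 3))
# Hypothetical target: maximum wall deflection (mm), a synthetic nonlinear function plus noise
y = 0.5 * X[:, 0] ** 2 / (1e-3 * X[:, 1] * np.log10(X[:, 2])) + rng.normal(0, 0.5, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of 32 neurons; the weights and biases are fitted by minimizing
# the mean squared error with a stochastic-gradient-type optimizer, the gradients
# being computed by backpropagation.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

print("held-out R^2:", model.score(X_test, y_test))  # held-out score as a check against over-fitting
```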

    Multilevel parallelism in sequence alignment using a streaming approach

    Proceedings of: Second International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2015), Krakow (Poland), September 10-11, 2015. Ultrascale computing and bioinformatics are two rapidly growing fields with a large impact today and an even larger one expected in the future. The introduction of next generation sequencing pushes current bioinformatics tools and workflows to their limits in terms of performance, forcing the tools to become increasingly performant to keep up with the growing speed at which sequencing data is created. Ultrascale computing can greatly benefit bioinformatics in the challenges it faces today, especially in terms of scalability, data management and reliability. But before this is possible, the algorithms and software used in the field of bioinformatics need to be prepared for use in a heterogeneous distributed environment. For this paper we chose to look at sequence alignment, which has been an active topic of research for speeding up next generation sequence analysis, as it is ideally suited for parallel processing. We present a multilevel stream-based parallel architecture to transparently distribute sequence alignment over multiple cores of the same machine, multiple machines and cloud resources. The same concepts are used to achieve multithreaded and distributed parallelism, making the architecture simple to extend and adapt to new situations. A prototype of the architecture has been implemented using an existing commercial sequence aligner. We demonstrate the flexibility of the implementation by running it on different configurations, combining local and cloud computing resources.
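    A minimal sketch of the stream-based distribution idea described above, assuming a hypothetical align_chunk placeholder in place of the commercial aligner and a local process pool standing in for remote or cloud workers:

```python
# Illustrative sketch only: reads are streamed in chunks to a pool of worker
# processes; in the architecture described above the same pattern would extend
# to other machines or cloud workers. align_chunk is a placeholder, not a real aligner.
from concurrent.futures import ProcessPoolExecutor
from itertools import islice

def read_chunks(path, reads_per_chunk=10_000):
    """Stream a FASTQ-like file as chunks of lines (4 lines per read)."""
    with open(path) as handle:
        while True:
            chunk = list(islice(handle, 4 * reads_per_chunk))
            if not chunk:
                return
            yield chunk

def align_chunk(lines):
    """Placeholder for invoking the external sequence aligner on one chunk."""
    return [f"aligned:{lines[i].strip()}" for i in range(0, len(lines), 4)]

def align_streaming(path, workers=4):
    results = []
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # Later chunks are still being read while earlier ones are aligned,
        # so each chunk is an independent work item in the stream.
        futures = [pool.submit(align_chunk, chunk) for chunk in read_chunks(path)]
        for future in futures:
            results.extend(future.result())
    return results

if __name__ == "__main__":
    print(len(align_streaming("reads.fastq")))  # "reads.fastq" is a hypothetical input file
```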

    FriendComputing: Organic application centric distributed computing

    Proceedings of: Second International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2015), Krakow (Poland), September 10-11, 2015. Building ultrascale computer systems is a hard problem, not yet solved and not yet fully explored. Combining the computing resources of multiple organizations, often in different administrative domains with heterogeneous hardware and diverse demands on the system, requires new tools and frameworks to be put in place. In previous work we developed POP-Java, a Java programming language extension that makes it easy to develop distributed applications in a heterogeneous environment. We now present an extension to the POP-Java language that allows the creation of application-centred networks in which any member can benefit from the computing power and storage capacity of the other members. An accounting system is integrated, allowing the different members of the network to bill the usage of their resources to the other members, if so desired. The system is expanded through a process similar to the one seen in social networks, making it possible to use the resources of friends and of friends of friends. Parts of the proposed system have been implemented as a prototype inside the POP-Java programming language.
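    As a loose illustration of the friend-of-friend expansion and accounting idea only (this is not POP-Java code and does not reflect its API), one might sketch it as follows:

```python
# Illustrative sketch: friend-of-friend resource discovery with simple usage
# accounting, mirroring the network-expansion idea described above.
# Class names, fields and units are hypothetical.
from collections import defaultdict

class Member:
    def __init__(self, name, cpu_cores):
        self.name = name
        self.cpu_cores = cpu_cores
        self.friends = []                   # direct friends in the application network
        self.billed = defaultdict(float)    # hours billed to other members

    def discover(self, max_hops=2):
        """Collect members reachable within max_hops friend links (friends of friends)."""
        seen, frontier = {self}, [self]
        for _ in range(max_hops):
            frontier = [f for m in frontier for f in m.friends if f not in seen]
            seen.update(frontier)
        return seen - {self}

    def use(self, provider, hours):
        """Run work on a provider's resources; the provider bills the requester."""
        provider.billed[self.name] += hours

a, b, c = Member("a", 4), Member("b", 8), Member("c", 16)
a.friends.append(b); b.friends.append(c)        # c is a friend of a friend of a
providers = a.discover()                         # {b, c}
a.use(max(providers, key=lambda m: m.cpu_cores), hours=2.5)
print(c.billed)                                  # {'a': 2.5}
```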

    Combinatorial Optimization Algorithms for Radio Network Planning

    An extended version of this paper was published in the journal Theoretical Computer Science (TCS), Special Issue on Combinatorics and Computer Science, 265(1):235-245, 2001. See http://hal.archives-ouvertes.fr/hal-00346034/fr/

    Combinatorial Optimization Algorithms for Radio Network Planning

    Special Issue on Combinatorics and Computer Science. This paper uses a realistic problem taken from the telecommunication world as the basis for comparing different combinatorial optimization algorithms. The problem recalls the minimum hitting set problem and is solved with greedy-like, Darwinism and genetic algorithms. These three paradigms are described and analyzed, with emphasis on the Darwinism approach, which is based on the computation of epsilon-nets.
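    A hedged sketch of the greedy-like paradigm for the minimum hitting set problem mentioned above (the Darwinism and genetic variants are not shown); the toy instance and its radio-planning interpretation are assumptions, not the paper's data:

```python
# Illustrative greedy heuristic for minimum hitting set: repeatedly pick the
# element that hits the largest number of still-unhit sets. This mirrors the
# "greedy-like" paradigm named in the abstract; it is not the paper's exact algorithm.
def greedy_hitting_set(sets):
    remaining = [set(s) for s in sets]
    hitting = set()
    while remaining:
        # Count how many remaining sets each candidate element would hit.
        counts = {}
        for s in remaining:
            for e in s:
                counts[e] = counts.get(e, 0) + 1
        best = max(counts, key=counts.get)
        hitting.add(best)
        remaining = [s for s in remaining if best not in s]
    return hitting

# Toy instance: each set could model the candidate base-station sites covering one test point.
print(greedy_hitting_set([{1, 2}, {2, 3}, {3, 4}, {4, 1}]))  # prints a hitting set of size 2, e.g. {1, 3}
```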

    2013 Doctoral Workshop on Distributed Systems

    The Doctoral Workshop on Distributed Systems was held at Les Plans-sur-Bex, Switzerland, from June 26-28, 2013. Ph.D. students from the Universities of Neuchâtel and Bern as well as the University of Applied Sciences of Fribourg presented their current research work and discussed recent research results. This technical report includes the extended abstracts of the talks given during the workshop.

    Enhancing user fairness in OFDMA radio access networks through machine learning

    The problem of radio resource scheduling subject to fairness constraints is very challenging, even in future radio access networks. Standard fairness criteria aim to find the best trade-off between overall throughput maximization and user fairness satisfaction under various types of network conditions. However, at the Radio Resource Management (RRM) level, the existing schedulers are rather static, unable to react to the momentary networking conditions so that the user fairness measure is maximized at all times. This paper proposes a dynamic scheduler framework able to parameterize the proportional fair scheduling rule at each Transmission Time Interval (TTI) in order to improve user fairness. To deal with the framework's complexity, the parameterization decisions are approximated by neural networks acting as non-linear functions. An actor-critic Reinforcement Learning (RL) algorithm is used to learn the set of non-linear functions that approximate the best fairness parameters to be applied in each momentary state. Simulation results reveal that the proposed framework outperforms the existing fairness adaptation techniques as well as other types of RL-based schedulers.
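    As an illustration of what parameterizing the proportional fair rule at each TTI can mean, the sketch below applies a generalized PF metric with a tunable fairness parameter alpha; the rate model, the averaging constant and the fixed alpha are assumptions, and in the paper's framework alpha would instead be chosen by the actor-critic agent each TTI:

```python
# Illustrative sketch of a parameterized proportional fair (PF) rule: at each TTI,
# every resource block goes to the user maximizing instantaneous_rate / average_rate**alpha.
# Larger alpha pushes the allocation toward fairness, smaller alpha toward throughput.
import numpy as np

def schedule_tti(inst_rates, avg_rates, alpha):
    """inst_rates: (users, blocks) achievable rates this TTI; avg_rates: (users,) past averages."""
    metric = inst_rates / (avg_rates[:, None] ** alpha)
    return np.argmax(metric, axis=0)                 # winning user per resource block

def update_averages(avg_rates, inst_rates, allocation, beta=0.05):
    served = np.zeros_like(avg_rates)
    for block, user in enumerate(allocation):
        served[user] += inst_rates[user, block]
    return (1 - beta) * avg_rates + beta * served    # exponential moving average of served rates

rng = np.random.default_rng(1)
avg = np.ones(4)                                     # 4 users, 12 resource blocks (hypothetical)
for tti in range(100):
    inst = rng.rayleigh(1.0, size=(4, 12))           # toy per-block channel rates
    alpha = 1.0                                      # here fixed; the RL agent would pick this per TTI
    alloc = schedule_tti(inst, avg, alpha)
    avg = update_averages(avg, inst, alloc)
print("long-run average rates:", np.round(avg, 2))   # rates become more equal as alpha grows
```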

    Radio Network Planning with Combinatorial Optimization Algorithms

    Future UMTS radio planning engineers will face difficult problems due to the complexity of the system and the size of these networks. In the STORMS project, a software tool for the optimisation of the radio network is under development. Two mathematical models of the radio planning problem are proposed, and two software prototypes based on these models are described, together with the first experimental and comparative results.

    A comparison of reinforcement learning algorithms in fairness-oriented OFDMA schedulers

    Due to large-scale control problems in 5G access networks, the complexity of radio resource management is expected to increase significantly. Reinforcement learning is seen as a promising solution that can enable intelligent decision-making and reduce the complexity of different optimization problems for radio resource management. The packet scheduler is an important entity of radio resource management that allocates users' data packets in the frequency domain according to the implemented scheduling rule. In this context, by making use of reinforcement learning, we could actually determine, in each state, the most suitable scheduling rule to be employed that could improve the quality of service provisioning. In this paper, we propose a reinforcement learning-based framework to solve scheduling problems with the main focus on meeting the user fairness requirements. This framework makes use of feed-forward neural networks to map momentary states to proper parameterization decisions for the proportional fair scheduler. The simulation results show that our reinforcement learning framework outperforms the conventional adaptive schedulers oriented on the fairness objective. Discussions are also raised to determine the best reinforcement learning algorithm to be implemented in the proposed framework based on various scheduler settings.
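    A minimal sketch of the state-to-parameter mapping described above, using a tiny feed-forward network with random (untrained) weights and two hypothetical state features; in the paper this mapping is learned with the compared reinforcement learning algorithms, so everything here is an assumption about the shape of the mapping only:

```python
# Illustrative sketch only: a small feed-forward network mapping a momentary
# scheduler state (here, Jain's fairness index and a normalized cell throughput,
# both hypothetical features) to the PF fairness parameter for the next TTI.
# The weights are random, so the output is meaningful only as an example of the mapping.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)   # 2 state features -> 16 hidden units
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)    # hidden units -> 1 fairness parameter

def fairness_parameter(state):
    hidden = np.tanh(state @ W1 + b1)             # hidden layer with tanh activation
    raw = hidden @ W2 + b2
    return 1.0 + np.tanh(raw).item()              # squash into the range (0, 2)

state = np.array([0.72, 0.55])                    # [Jain's fairness index, normalized throughput]
print("alpha for the next TTI:", fairness_parameter(state))
```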

    Les droits disciplinaires des fonctions publiques : « unification », « harmonisation » ou « distanciation ». A propos de la loi du 26 avril 2016 relative à la déontologie et aux droits et obligations des fonctionnaires

    The production of $t\bar{t}$, $W+b\bar{b}$ and $W+c\bar{c}$ is studied in the forward region of proton–proton collisions collected at a centre-of-mass energy of 8 TeV by the LHCb experiment, corresponding to an integrated luminosity of $1.98 \pm 0.02\,\mathrm{fb}^{-1}$. The $W$ bosons are reconstructed in the decays $W \rightarrow \ell\nu$, where $\ell$ denotes muon or electron, while the $b$ and $c$ quarks are reconstructed as jets. All measured cross-sections are in agreement with next-to-leading-order Standard Model predictions.
    • 

    corecore